1.
Curr Med Imaging; 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38639284

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence of stroke is rising; it is the second leading cause of mortality and the third leading cause of disability worldwide. The goal of this study was to rapidly and accurately identify carotid plaques and automatically quantify plaque burden using our automated tracking and segmentation US-video system. METHODS: We collected 88 common carotid artery transverse-section videos (11,048 frames) from patients with a history of atherosclerosis or risk factors for atherosclerosis, which were randomly divided into training, test, and validation sets at a 6:3:1 ratio. We first trained different segmentation models to segment the carotid intima and adventitia and to calculate the maximum plaque burden automatically. Finally, we statistically compared the plaque burden calculated automatically by the best model against manual labeling by senior sonographers. RESULTS: Of the three artificial intelligence (AI) models, the Robust Video Matting (RVM) segmentation model achieved the highest Dice coefficients (DC) for the carotid intima and adventitia, reaching 0.93 and 0.95, respectively. Moreover, the RVM model showed the strongest correlation with the senior sonographers (correlation coefficient 0.61 ± 0.28), and its diagnostic effectiveness was comparable to that of the experts by paired t-test and Bland-Altman analysis [P = 0.632 and ICC 0.01 (95% CI: -0.24 to 0.27), respectively]. CONCLUSION: Our findings indicate that the RVM model can be applied to carotid ultrasound video: it can automatically segment plaque and quantify atherosclerotic plaque burden at the same diagnostic level as senior sonographers. Applying AI to carotid videos offers a more precise and efficient way to evaluate carotid atherosclerosis in clinical practice.
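
A minimal sketch of how a Dice coefficient (DC) such as the intima/adventitia values above can be computed from binary segmentation masks; the function name and mask format are illustrative, not the authors' code:

    import numpy as np

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks,
        # e.g., predicted vs. expert-drawn intima.
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        inter = np.logical_and(pred, true).sum()
        return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)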

2.
Vascular; 17085381241246312, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656244

ABSTRACT

OBJECTIVES: Assessment of plaque stenosis severity allows better management of carotid sources of stroke. Our objective was to create a deep learning (DL) model to segment carotid intima-media thickness and plaque and to automatically calculate plaque stenosis severity on common carotid artery (CCA) transverse-section ultrasound images. METHODS: Three hundred and ninety images from 376 individuals were used to train (235/390, 60%), validate (39/390, 10%), and test (116/390, 30%) the newly proposed CANet model. We also evaluated the model on an external test set of 122 images from 115 individuals acquired at another hospital. Comparative studies were conducted between our CANet model, four state-of-the-art DL models, and two experienced sonographers to evaluate the present model's performance. RESULTS: On the internal test set, our CANet model outperformed the four comparative models, with Dice values of 95.22% versus 90.15%, 87.48%, 90.22%, and 91.56% on lumen-intima (LI) borders and 96.27% versus 91.40%, 88.94%, 91.19%, and 92.88% on media-adventitia (MA) borders. On the external test set, our model still produced excellent results, with a Dice value of 92.41%. Good consistency of stenosis severity calculation was observed between the CANet model and the experienced sonographers, with intraclass correlation coefficients (ICC) of 0.927 and 0.702 and Pearson's correlation coefficients of 0.928 and 0.704 on the internal and external test sets, respectively. CONCLUSIONS: Our CANet model achieved excellent performance in segmenting carotid IMT and plaques and in the automated calculation of stenosis severity.
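
The abstract does not spell out the stenosis formula; a common area-based definition on a transverse section, using the segmented lumen-intima (LI) and media-adventitia (MA) masks, is sketched below (the mask inputs are assumptions):

    import numpy as np

    def area_stenosis_percent(li_mask, ma_mask):
        # Percent area stenosis = (1 - lumen area / vessel area) * 100,
        # with the lumen taken from the LI border and the vessel from the MA border.
        lumen_area = float(li_mask.astype(bool).sum())
        vessel_area = float(ma_mask.astype(bool).sum())
        return 100.0 * (1.0 - lumen_area / vessel_area)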

3.
iScience; 27(4): 109403, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38523785

ABSTRACT

We evaluated the diagnostic performance of a multimodal deep-learning (DL) model for the differential diagnosis of ovarian masses. This single-center retrospective study included 1,054 ultrasound (US)-detected ovarian tumors (699 benign and 355 malignant). Patients were randomly divided into training (n = 675), validation (n = 169), and testing (n = 210) sets. The model was developed using ResNet-50. Three DL-based models were proposed for benign-malignant classification of these lesions: a single-modality model that used only US images; a dual-modality model that used US images and menopausal status as inputs; and a multimodal model that integrated US images, menopausal status, and serum indicators. After 5-fold cross-validation, 210 lesions were tested. We evaluated the three models using the area under the curve (AUC), accuracy, sensitivity, and specificity. The multimodal ResNet-50 DL model outperformed the single- and dual-modality models in identifying benign and malignant ovarian tumors, with 93.80% accuracy and an AUC of 0.983.
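
A hedged sketch of what such a multimodal ResNet-50 classifier could look like in Keras; the input size, the number of serum indicators, and fusion by simple concatenation are assumptions, since the paper's exact architecture is not given here:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_multimodal_model(num_serum_indicators=4):
        # Image branch: ResNet-50 backbone, as named in the abstract.
        img_in = layers.Input(shape=(224, 224, 3), name="us_image")
        backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet")
        x = layers.GlobalAveragePooling2D()(backbone(img_in))

        # Tabular branch: menopausal status (0/1) plus serum indicators.
        tab_in = layers.Input(shape=(1 + num_serum_indicators,), name="clinical")
        t = layers.Dense(32, activation="relu")(tab_in)

        fused = layers.Concatenate()([x, t])
        out = layers.Dense(1, activation="sigmoid", name="malignancy")(fused)
        return Model([img_in, tab_in], out)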

4.
Ultrasound Med Biol; 50(5): 722-728, 2024 May.
Article in English | MEDLINE | ID: mdl-38369431

ABSTRACT

OBJECTIVE: Although ultrasound is a common tool for breast cancer screening, its accuracy is often operator-dependent. In this study, we propose a new automated deep-learning framework that exploits video-based ultrasound data for breast cancer screening. METHODS: Our framework incorporates DenseNet121, MobileNet, and Xception as backbones for both video- and image-based models. We used data from 3907 patients to train and evaluate the models, which were tested using video- and image-based methods as well as reader studies with human experts. RESULTS: This study evaluated 3907 female patients aged 22 to 86 years. The MobileNet video model achieved an AUROC of 0.961 in prospective data testing, surpassing the DenseNet121 video model. In real-world data testing, it demonstrated an accuracy of 92.59%, outperforming both the DenseNet121 and Xception video models and exceeding the 76.00%-85.60% accuracy range of the human experts. Additionally, the MobileNet video model exceeded the performance of the image models and the other video models across all evaluation metrics, including accuracy, sensitivity, specificity, F1 score, and AUC. Its exceptional performance makes it particularly suitable for resource-limited clinical settings and demonstrates its potential for clinical application in breast cancer screening. CONCLUSIONS: The video models reached a level of expertise greater than that achieved by the image-based models. We have developed a video-based artificial intelligence framework that may aid breast cancer diagnosis and alleviate the shortage of experienced experts.
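
One plausible way to turn a frame-level MobileNet backbone into a video model is to share it across frames and pool over time; this is a sketch under that assumption, not the paper's published architecture:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_video_classifier(num_frames=32):
        frames_in = layers.Input(shape=(num_frames, 224, 224, 3), name="us_video")
        cnn = tf.keras.applications.MobileNet(include_top=False,
                                              weights="imagenet", pooling="avg")
        per_frame = layers.TimeDistributed(cnn)(frames_in)      # (batch, frames, 1024)
        clip_feat = layers.GlobalAveragePooling1D()(per_frame)  # average over time
        out = layers.Dense(1, activation="sigmoid", name="malignant")(clip_feat)
        return Model(frames_in, out)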


Subjects
Breast Neoplasms; Deep Learning; Humans; Female; Breast Neoplasms/diagnostic imaging; Artificial Intelligence; Prospective Studies; Ultrasonography
5.
Comput Methods Programs Biomed; 245: 108039, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38266556

ABSTRACT

BACKGROUND: The grade of ductal carcinoma in situ (DCIS) identified by biopsy is often upstaged at surgery. Therefore, confirming the DCIS grade preoperatively is necessary for clinical decision-making. PURPOSE: To train a three-class deep learning (DL) model based on ultrasound (US), combining clinical data, mammography (MG), US, and core needle biopsy (CNB) pathology, to predict low-grade DCIS, intermediate-to-high-grade DCIS, and upstaged DCIS. MATERIALS AND METHODS: Data from 733 patients with 754 biopsy-confirmed DCIS cases were retrospectively collected from May 2013 to June 2022 (N1); a second data set (N2) comprised cases confirmed by biopsy as low-grade DCIS. The lesions were randomly divided into training (n = 471), validation (n = 142), and test (n = 141) sets to establish the DCIS-Net. Information from the DCIS-Net, clinical data (age and signs), US (size, calcifications, type, Breast Imaging Reporting and Data System [BI-RADS] category), MG (microcalcifications, BI-RADS category), and CNB pathology (nuclear grade, architectural features, and immunohistochemistry) was collected. Logistic regression and random forest analyses were conducted to develop the multimodal DCIS-Net, and specificity, sensitivity, accuracy, the receiver operating characteristic curve, and the area under the curve (AUC) were calculated. RESULTS: In the test set of N1, the accuracy and AUC of the multimodal DCIS-Net were 0.752-0.766 and 0.859-0.907, respectively, in the three-class task. The accuracy and AUC for discriminating DCIS from upstaged DCIS were 0.751-0.780 and 0.829-0.861, respectively. In the test set of N2, the accuracy and AUC for discriminating low-grade DCIS from upstaged low-grade DCIS were 0.769-0.987 and 0.818-0.939, respectively. The DL features ranked first through fifth in feature importance in the multimodal DCIS-Net. CONCLUSION: By developing the DCIS-Net and integrating it with multimodal information, it is possible to diagnose low-grade DCIS, intermediate-to-high-grade DCIS, and upstaged DCIS. The model can also distinguish DCIS from upstaged DCIS and low-grade DCIS from upstaged low-grade DCIS, which could pave the way for improving the DCIS clinical workflow.
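
A minimal sketch of the random-forest fusion step described above, assuming the DL outputs and the clinical/US/MG/CNB variables have already been assembled into a feature table (file names and column layout are hypothetical):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # Hypothetical feature table: DCIS-Net class probabilities plus clinical,
    # US, MG, and CNB pathology variables, one row per lesion.
    X_train, y_train = np.load("X_train.npy"), np.load("y_train.npy")
    X_test, y_test = np.load("X_test.npy"), np.load("y_test.npy")

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)

    # Three-class task: macro one-vs-rest AUC, as in the abstract's metrics.
    auc = roc_auc_score(y_test, rf.predict_proba(X_test), multi_class="ovr")
    print(f"AUC: {auc:.3f}")

    # Feature importances - the abstract reports the DL features ranking 1st-5th.
    ranking = np.argsort(rf.feature_importances_)[::-1]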


Subjects
Breast Neoplasms; Calcinosis; Carcinoma, Ductal, Breast; Carcinoma, Intraductal, Noninfiltrating; Pathology, Surgical; Humans; Female; Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging; Carcinoma, Intraductal, Noninfiltrating/surgery; Retrospective Studies; Mammography; Breast Neoplasms/diagnostic imaging
6.
BMC Med Inform Decis Mak; 24(1): 1, 2024 Jan 2.
Article in English | MEDLINE | ID: mdl-38166852

ABSTRACT

BACKGROUND: The application of artificial intelligence (AI) to the ultrasound (US) diagnosis of breast cancer (BCa) is increasingly prevalent. However, the impact of US-probe frequency on the diagnostic efficacy of AI models has not been clearly established. OBJECTIVES: To explore the impact of using US video of variable frequencies on the diagnostic efficacy of AI in breast US screening. METHODS: This study used US probes of different frequencies (linear arrays L14: frequency range 3.0-14.0 MHz, central frequency 9 MHz; L9: frequency range 2.5-9.0 MHz, central frequency 6.5 MHz; L13: frequency range 3.6-13.5 MHz, central frequency 8 MHz; and L7: frequency range 3-7 MHz, central frequency 4.0 MHz) to collect breast videos, and applied an entropy-based deep learning approach for evaluation. We analyzed the average two-dimensional image entropy (2-DIE) of these videos and the performance of AI models in processing videos from these different frequencies to assess how probe frequency affects AI diagnostic performance. RESULTS: In testing set 1, L9 had a higher average 2-DIE than L14; in testing set 2, L13 had a higher average 2-DIE than L7. The diagnostic efficacy of the US data used in the AI model analysis varied across frequencies (AUC: L9 > L14, 0.849 vs. 0.784; L13 > L7, 0.920 vs. 0.887). CONCLUSION: This study indicates that US data acquired with probes of different frequencies exhibit different average 2-DIE values, and that datasets with a higher average 2-DIE yield better diagnostic outcomes in AI-driven BCa diagnosis. Unlike other studies, our research emphasizes the influence of US-probe frequency selection on AI model diagnostic performance, rather than focusing solely on the AI algorithms themselves. These insights offer a new perspective for early BCa screening and diagnosis and are significant for future choices of US equipment and optimization of AI algorithms.


Research on artificial intelligence-assisted breast diagnosis often relies on static images or dynamic videos obtained from ultrasound probes of different frequencies. However, the effect of frequency-induced image variations on the diagnostic performance of artificial intelligence models remains unclear. In this study, we aimed to explore the impact of using ultrasound images of variable frequencies on AI's diagnostic efficacy in breast ultrasound screening. Our approach employed a video- and entropy-based feature breast network to compare the diagnostic efficiency and average two-dimensional image entropy of the L14 (frequency range: 3.0-14.0 MHz, central frequency 9 MHz) and L9 (frequency range: 2.5-9.0 MHz, central frequency 6.5 MHz) linear-array probes and the L13 (frequency range: 3.6-13.5 MHz, central frequency 8 MHz) and L7 (frequency range: 3-7 MHz, central frequency 4.0 MHz) linear-array probes. The results revealed that the diagnostic efficiency of the AI models differed with the frequency of the ultrasound probe. Notably, ultrasound images acquired with probes of different frequencies exhibit different average two-dimensional image entropy, and a higher average two-dimensional image entropy positively affects the diagnostic performance of the AI model. We conclude that a dataset with higher average two-dimensional image entropy is associated with superior diagnostic efficacy for AI-based breast diagnosis. These findings contribute to a better understanding of how ultrasound image variations impact AI-assisted breast diagnosis, potentially leading to improved breast cancer screening outcomes.
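
The exact definition of 2-DIE is not given in these summaries; a common two-dimensional image entropy uses the joint histogram of each pixel's gray level and its 3 × 3 neighborhood mean, which the following sketch assumes:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def two_dimensional_entropy(img, levels=256):
        # Joint histogram of (gray level, 3x3 neighborhood mean), then Shannon entropy.
        img = img.astype(np.uint8)
        nbr = uniform_filter(img.astype(float), size=3).astype(np.uint8)
        joint, _, _ = np.histogram2d(img.ravel(), nbr.ravel(),
                                     bins=levels, range=[[0, levels], [0, levels]])
        p = joint / joint.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def average_2die(frames):
        # Average 2-DIE over all frames of one probe's videos.
        return float(np.mean([two_dimensional_entropy(f) for f in frames]))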


Subjects
Artificial Intelligence; Breast Neoplasms; Humans; Female; Entropy; Ultrasonography; Breast Neoplasms/diagnostic imaging; Algorithms
7.
Postgrad Med J; 100(1182): 228-236, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38142286

ABSTRACT

PURPOSE: We aimed to develop an artificial intelligence (AI) model based on transrectal ultrasonography (TRUS) images of biopsy needle tract (BNT) tissues to predict prostate cancer (PCa), and to compare its PCa diagnostic performance with that of radiologists and a clinical model. METHODS: A total of 1696 2D prostate TRUS images were collected from 142 patients between July 2021 and May 2022. The ResNet50 network was used to train classification models with different inputs: the original image (Whole model), the BNT (Needle model), and the combined image [Feature Pyramid Network (FPN) model]. The training, validation, and test sets were randomly assigned, and randomized 5-fold cross-validation between the training and validation sets was performed. The diagnostic effectiveness of the AI models and input combinations was assessed on an independent test set. The optimal AI model and input combination were then selected to compare diagnostic efficacy against senior radiologists and the clinical model. RESULTS: On the test set, the area under the curve, specificity, and sensitivity of the FPN model were 0.934, 0.966, and 0.829, respectively; its diagnostic efficacy was improved over the Whole and Needle models, with statistically significant differences (P < 0.05), and was better than that of the senior radiologists (area under the curve: 0.667). The FPN model detected more PCa than the senior physicians (82.9% vs. 55.8%), with a 61.3% decrease in the false-positive rate and a 23.2% increase in overall accuracy (0.887 vs. 0.655). CONCLUSION: The proposed FPN model offers a new method for prostate tissue classification, improves diagnostic performance, and may be a helpful tool to guide prostate biopsy.
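
The FPN model combines the whole TRUS image with the needle-tract region; a simplified two-branch stand-in (a shared ResNet50 applied to both views, not the authors' actual Feature Pyramid Network) might look like this:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_two_branch_classifier():
        whole_in = layers.Input(shape=(224, 224, 3), name="whole_trus")
        bnt_in = layers.Input(shape=(224, 224, 3), name="needle_tract_crop")
        # One shared ResNet50 backbone applied to both views.
        backbone = tf.keras.applications.ResNet50(include_top=False,
                                                  weights="imagenet", pooling="avg")
        fused = layers.Concatenate()([backbone(whole_in), backbone(bnt_in)])
        out = layers.Dense(1, activation="sigmoid", name="pca_probability")(fused)
        return Model([whole_in, bnt_in], out)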


Subjects
Artificial Intelligence; Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostate/diagnostic imaging; Prostate/pathology; Biopsy; Ultrasonography/methods
8.
Heliyon; 9(8): e19253, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37664701

ABSTRACT

Purpose: The objective of this research was to investigate the efficacy of various parameter combinations of convolutional neural network (CNN) models, namely MobileNet and DenseNet121, and different input image resolutions (REZs), ranging from 64 × 64 to 512 × 512 pixels, for diagnosing breast cancer. Materials and methods: Between June 2015 and November 2020, two hospitals collected two-dimensional ultrasound breast images for this retrospective multicenter study. The diagnostic performance of the MobileNet and DenseNet121 models was compared across resolutions. Results: MobileNet performed best in breast cancer diagnosis at a 320 × 320-pixel REZ, and DenseNet121 performed best at a 448 × 448-pixel REZ. Conclusion: Our study reveals a significant correlation between image resolution and breast cancer diagnosis accuracy. The comparison of MobileNet and DenseNet121 highlights that lightweight neural networks (LW-CNNs) can achieve performance similar to, or even slightly better than, heavyweight neural network models (HW-CNNs) on ultrasound images, with a lower per-image prediction time.
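
A resolution sweep like the one described can be set up by rebuilding each backbone at every candidate input size; the grayscale single-channel input and the specific REZ list are assumptions (ImageNet weights are omitted because they require 3-channel inputs):

    import tensorflow as tf

    def build_at_resolution(arch, rez):
        ctor = {"mobilenet": tf.keras.applications.MobileNet,
                "densenet121": tf.keras.applications.DenseNet121}[arch]
        backbone = ctor(include_top=False, weights=None,
                        input_shape=(rez, rez, 1), pooling="avg")
        out = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
        return tf.keras.Model(backbone.input, out)

    for arch in ("mobilenet", "densenet121"):
        for rez in (64, 128, 224, 320, 448, 512):
            model = build_at_resolution(arch, rez)
            # ...train, then compare AUC and per-image prediction time per (arch, rez)...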

9.
BMC Med Inform Decis Mak; 23(1): 174, 2023 Sep 4.
Article in English | MEDLINE | ID: mdl-37667320

ABSTRACT

BACKGROUND: This retrospective study aims to validate the effectiveness of artificial intelligence (AI) for detecting and classifying non-mass breast lesions (NMLs) on ultrasound (US) images. METHODS: A total of 228 patients with NMLs and 596 volunteers without breast lesions on US images were enrolled from January 2020 to December 2022. Pathological results served as the gold standard for NMLs. Two AI models, DenseNet121_448 and MobileNet_448, were developed to detect and classify NMLs on US images. To evaluate and compare their diagnostic performance, the area under the curve (AUC), accuracy, specificity, and sensitivity were employed. RESULTS: A total of 228 NML patients confirmed by postoperative pathology, with 870 US images, and 596 volunteers, with 1003 US images, were enrolled. In the detection experiment, MobileNet_448 achieved good performance on the testing set, with an AUC, accuracy, sensitivity, and specificity of 0.999 (95% CI: 0.997-1.000), 96.5%, 96.9%, and 96.1%, respectively; the difference from DenseNet121_448 was not statistically significant. In the classification experiment, the MobileNet_448 model achieved the highest diagnostic performance on the testing set, with an AUC, accuracy, sensitivity, and specificity of 0.837 (95% CI: 0.990-1.000), 70.5%, 80.3%, and 74.6%, respectively. CONCLUSIONS: This study suggests that AI models, particularly MobileNet_448, can effectively detect and classify NMLs on US images. This technique has the potential to improve early diagnostic accuracy for NMLs.
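
The AUC/accuracy/sensitivity/specificity reporting used throughout these studies can be reproduced from per-image labels and predicted probabilities; a small helper (the 0.5 decision threshold is an assumption):

    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    def summarize(y_true, y_prob, thr=0.5):
        y_pred = (y_prob >= thr).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return {"auc": roc_auc_score(y_true, y_prob),
                "accuracy": (tp + tn) / (tp + tn + fp + fn),
                "sensitivity": tp / (tp + fn),   # recall on positives
                "specificity": tn / (tn + fp)}   # recall on negatives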


Subjects
Artificial Intelligence; Breast; Humans; Retrospective Studies; Ultrasonography; Area Under Curve
10.
Comput Methods Programs Biomed; 235: 107527, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37086704

ABSTRACT

BACKGROUND AND OBJECTIVE: The value of implementing artificial intelligence (AI) in ultrasound screening for thyroid cancer has been acknowledged, with numerous early studies confirming that AI can help physicians reach more accurate diagnoses. However, the black-box nature of AI's decision-making process makes it difficult for users to grasp the basis of its predictions. Furthermore, explainability relates not only to AI performance but also to responsibility and risk in medical diagnosis. In this paper, we offer Explainer, an intrinsically explainable framework that can categorize images and create heatmaps highlighting the regions on which its predictions are based. METHODS: A dataset of 19,341 thyroid ultrasound images with pathological results and physician-annotated TI-RADS features was used to train and test the robustness of the proposed framework. We then conducted a benign-malignant classification study to determine whether physicians perform better with the assistance of the Explainer than they do alone or with Gradient-weighted Class Activation Mapping (Grad-CAM). RESULTS: Reader studies show that the Explainer achieves a more accurate diagnosis while providing explanatory heatmaps, and that physicians' performance improves when assisted by the Explainer. Case study results confirm that the Explainer locates more reasonable and feature-related regions than Grad-CAM. CONCLUSIONS: The Explainer offers physicians a tool to understand the basis of AI predictions and evaluate their reliability, which has the potential to unbox the "black box" of medical imaging AI.
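
Grad-CAM, the baseline the Explainer is compared against, is a standard published algorithm; a compact Keras version for a single-output classifier (the convolutional layer name must be supplied by the caller):

    import numpy as np
    import tensorflow as tf

    def grad_cam(model, image, conv_layer):
        grad_model = tf.keras.Model(model.input,
                                    [model.get_layer(conv_layer).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            score = preds[:, 0]                            # malignancy score
        grads = tape.gradient(score, conv_out)             # d(score)/d(feature maps)
        weights = tf.reduce_mean(grads, axis=(1, 2))       # GAP over spatial dims
        cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
        cam = tf.nn.relu(cam)
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy() # heatmap in [0, 1]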


Subjects
Physicians; Thyroid Neoplasms; Humans; Artificial Intelligence; Reproducibility of Results; Ultrasonography; Thyroid Neoplasms/diagnostic imaging
11.
iScience; 26(1): 105692, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36570770

ABSTRACT

Research on AI-assisted breast diagnosis has primarily been based on static images, and it is unclear whether these represent the best images for diagnosis. We aimed to explore a method of capturing complementary responsible frames from breast ultrasound screening using artificial intelligence. We used the feature entropy breast network (FEBrNet) to select responsible frames from breast ultrasound screenings and compared the diagnostic performance of AI models based on FEBrNet-recommended frames, physician-selected frames, frames selected at 5-frame intervals, and all frames of the video, as well as that of ultrasound and mammography specialists. The AUROC of the AI model based on FEBrNet-recommended frames outperformed the other frame-set-based AI models, as well as the ultrasound and mammography physicians, indicating that FEBrNet can reach the level of medical specialists in frame selection. The FEBrNet model can extract responsible frames from video for breast nodule diagnosis, with performance equivalent to that of physician-selected responsible frames.
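
FEBrNet ranks frames by an entropy-based criterion; as a simplified stand-in (not the published FEBrNet), one could score every frame with a classifier and keep the frames with the lowest prediction entropy, i.e., the most decisive ones:

    import numpy as np
    import tensorflow as tf

    def select_responsible_frames(model, frames, k=5):
        probs = model.predict(frames, verbose=0).ravel()        # per-frame malignancy prob
        entropy = -(probs * np.log2(probs + 1e-8)
                    + (1 - probs) * np.log2(1 - probs + 1e-8))  # binary entropy
        return np.argsort(entropy)[:k]                          # k most decisive frames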

12.
Comput Methods Programs Biomed; 226: 107170, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36272307

ABSTRACT

PURPOSE: To investigate whether artificial intelligence can identify fetal intracranial structures at pregnancy weeks 11-14, and to provide an automated method for standard/non-standard sagittal view classification in obstetric ultrasound examinations. METHODS AND MATERIALS: We proposed a newly designed deep learning (DL) scheme, the Fetus Framework, to identify nine fetal intracranial structures: thalami, midbrain, palate, 4th ventricle, cisterna magna, nuchal translucency (NT), nasal tip, nasal skin, and nasal bone. The Fetus Framework was trained and tested on a dataset of 1528 2D sagittal-view ultrasound images from 1519 females collected at Shenzhen People's Hospital. Results from the Fetus Framework were further used for standard/non-standard (S-NS) plane classification, a key step for NT measurement and Down syndrome assessment. S-NS classification was also tested with 156 images from the Longhua branch of Shenzhen People's Hospital. Sensitivity, specificity, and the area under the curve (AUC) were evaluated to compare the Fetus Framework, three classic DL models, and human experts with 1, 3, and 5 years of ultrasound training. Furthermore, four physicians with more than 5 years of experience conducted a reader study of diagnosing fetal malformation on a dataset of 316 standard images confirmed by the Fetus Framework and another dataset of 316 standard images selected by physicians. The accuracy, sensitivity, specificity, precision, and F1-score of the physicians' diagnoses on both sets were compared. RESULTS: The nine intracranial structures identified by the Fetus Framework in validation were all consistent with those identified by senior radiologists. For S-NS sagittal view identification, the Fetus Framework achieved an AUC of 0.996 (95% CI: 0.987, 1.000) in the internal test, on par with the classic DL models. In the external test, the Fetus Framework reached an AUC of 0.974 (95% CI: 0.952, 0.995), while ResNet-50 reached an AUC of 0.883 (95% CI: 0.828-0.939), Xception 0.890 (95% CI: 0.834-0.946), and DenseNet-121 0.894 (95% CI: 0.839-0.949). On the internal test set, the sensitivity and specificity of the proposed framework were (0.905, 1), while those of the first-, third-, and fifth-year clinicians were (0.619, 0.986), (0.690, 0.958), and (0.798, 0.986), respectively. On the external test set, the sensitivity and specificity of the Fetus Framework were (0.989, 0.797), while those of the first-, third-, and fifth-year clinicians were (0.533, 0.875), (0.609, 0.844), and (0.663, 0.781), respectively. On the fetal malformation classification task, all physicians achieved higher accuracy and F1-scores on the Fetus-Framework-selected standard images, with statistical significance (p < 0.01). CONCLUSION: We propose a new deep learning-based Fetus Framework for identifying key fetal intracranial structures. The framework was tested on data from two medical centers. The results show consistency with, and improvement over, classic models and human experts in standard and non-standard sagittal view classification at pregnancy weeks 11-13+6. CLINICAL RELEVANCE/APPLICATION: With further refinement in larger populations, the proposed model can improve the efficiency and accuracy of early-pregnancy ultrasound examination.
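
The abstract's AUC confidence intervals can be obtained in several ways; one common choice, sketched here as an assumption rather than the authors' method, is a bootstrap over test cases:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auc_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n, aucs = len(y_true), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)               # resample cases with replacement
            if len(np.unique(y_true[idx])) < 2:
                continue                              # need both classes for an AUC
            aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
        lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return roc_auc_score(y_true, y_prob), lo, hi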


Subjects
Deep Learning; Pregnancy; Female; Humans; Artificial Intelligence; Sensitivity and Specificity; Ultrasonography; Fetus/diagnostic imaging
13.
Front Oncol; 12: 869421, 2022.
Article in English | MEDLINE | ID: mdl-35875151

ABSTRACT

Purpose: The purpose of this study was to explore the performance of different combinations of deep learning (DL) models (Xception, DenseNet121, MobileNet, ResNet50, and EfficientNetB0) and input image resolutions (REZs) (224 × 224, 320 × 320, and 448 × 448 pixels) for breast cancer diagnosis. Methods: This multicenter study retrospectively analyzed gray-scale ultrasound breast images collected from two Chinese hospitals. The data were divided into training, validation, internal testing, and external testing sets. Three hundred images were randomly selected for the physician-AI comparison. The Wilcoxon test was used to compare the diagnostic error of physicians and models at the 0.05 and 0.10 significance levels. Specificity, sensitivity, accuracy, and area under the curve (AUC) were the primary evaluation metrics. Results: A total of 13,684 images of 3447 female patients were included. In the external test, the 224 and 320 REZs achieved the best performance for MobileNet and EfficientNetB0, respectively (AUC: 0.893 and 0.907), while the 448 REZ achieved the best performance for Xception, DenseNet121, and ResNet50 (AUC: 0.900, 0.883, and 0.871, respectively). On the physician-AI test set, the 320 REZ for EfficientNetB0 (AUC: 0.896, P < 0.1) was better than the senior physicians. The 224 REZ for MobileNet (AUC: 0.878, P < 0.1) and the 448 REZ for Xception (AUC: 0.895, P < 0.1) were better than the junior physicians, while the 448 REZ for DenseNet121 (AUC: 0.880, P < 0.05) and ResNet50 (AUC: 0.838, P < 0.05) were only better than the entry-level physicians. Conclusion: On gray-scale ultrasound breast images, we identified the best DL model-resolution combination, which outperformed the physicians.
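
The physician-model comparison above relies on the Wilcoxon test; a minimal scipy sketch, assuming per-image error indicators (0 = correct, 1 = wrong) for one physician and one model on the same 300 images (file names are hypothetical):

    import numpy as np
    from scipy.stats import wilcoxon

    physician_err = np.load("physician_errors.npy")   # hypothetical per-image errors
    model_err = np.load("model_errors.npy")

    stat, p = wilcoxon(physician_err, model_err)      # paired signed-rank test
    print(f"statistic={stat:.1f}, p={p:.4f}")
    print("significant at 0.05" if p < 0.05 else
          "significant at 0.10" if p < 0.10 else "not significant")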

14.
J Clin Ultrasound; 50(2): 296-301, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35038176

ABSTRACT

OBJECTIVE: To evaluate whether automatic classification of rheumatoid arthritis (RA) metacarpophalangeal joint conditions on ultrasound images is feasible with a deep learning (DL) method, so as to provide a more objective, automated, and fast way of diagnosing RA in the clinical setting. MATERIALS AND METHODS: A DenseNet-based DL model was used, and both training and testing were implemented in TensorFlow 1.13.1 with the Keras DL libraries. The area under the curve (AUC), accuracy, sensitivity, and specificity values with 95% CIs are reported. Statistical analysis was performed using the scikit-learn libraries in Python 3.7. RESULTS: A total of 1337 RA ultrasound images were acquired from 208 patients; the number of images was 313, 657, 178, and 189 for OESS grades L0, L1, L2, and L3, respectively. In Classification Scenario 1 (SP-no versus SP-yes), three experiments with regions of interest of size 192 × 448 (Group 1), 96 × 224 (Group 2), and 96 × 224 stacked with a pre-segmented annotated mask of the SP area (Group 3) as input achieved AUCs of 0.863 (95% CI: 0.809, 0.917), 0.861 (95% CI: 0.805, 0.916), and 0.886 (95% CI: 0.836, 0.936), respectively. In Classification Scenario 2 (Healthy versus Diseased), the experiments in Groups 1, 2, and 3 achieved AUCs of 0.848 (95% CI: 0.799, 0.896), 0.864 (95% CI: 0.819, 0.909), and 0.916 (95% CI: 0.883, 0.952), respectively. CONCLUSION: We combined a DenseNet model with ultrasound images for RA condition assessment and demonstrated the feasibility of using DL to create an automatic RA condition classification system. The proposed method can serve as an alternative for the initial screening of RA patients.
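
Group 3's input, a 96 × 224 ROI stacked with the pre-segmented SP-area mask, amounts to feeding a two-channel image; a sketch of that preprocessing and a matching DenseNet input is given below (ImageNet weights are omitted since they do not apply to 2-channel inputs):

    import numpy as np
    import tensorflow as tf

    def stack_image_and_mask(roi, sp_mask):
        # (96, 224) grayscale ROI + (96, 224) binary SP mask -> (96, 224, 2) input.
        return np.stack([roi.astype(np.float32) / 255.0,
                         sp_mask.astype(np.float32)], axis=-1)

    backbone = tf.keras.applications.DenseNet121(include_top=False, weights=None,
                                                 input_shape=(96, 224, 2), pooling="avg")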


Subjects
Arthritis, Rheumatoid; Deep Learning; Synovitis; Arthritis, Rheumatoid/diagnostic imaging; Cell Proliferation; Humans; Metacarpophalangeal Joint/diagnostic imaging; Ultrasonography
15.
Eur Radiol; 31(7): 4991-5000, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33404698

ABSTRACT

OBJECTIVES: To investigate how a DL model makes decisions in lesion classification with a newly defined region of evidence (ROE), incorporating "explainable AI" (xAI) techniques. METHODS: A data set of 785 2D breast ultrasound images was acquired from 367 females. DenseNet-121 was used to classify each lesion as benign or malignant. For performance assessment, classification results were evaluated by calculating accuracy, sensitivity, specificity, and receiver operating characteristic curves for experiments with both coarse and fine regions of interest (ROIs). The area under the curve (AUC) was evaluated, and the true-positive, false-positive, true-negative, and false-negative results, with a breakdown into high, medium, and low resemblance on the test sets, were also reported. RESULTS: The two models, with coarse and fine ROIs of ultrasound images as input, achieved AUCs of 0.899 and 0.869, respectively. The accuracy, sensitivity, and specificity of the model with coarse ROIs were 88.4%, 87.9%, and 89.2%, and with fine ROIs 86.1%, 87.9%, and 83.8%, respectively. The DL model captured ROEs that closely resemble the regions physicians consider as they assess the images. CONCLUSIONS: We demonstrated the effectiveness of using DenseNet to classify breast lesions with a limited quantity of 2D grayscale ultrasound image data. We also proposed a new ROE-based metric system that can help physicians and patients better understand how AI makes decisions when reading images, which could potentially be integrated as part of the evidence in early screening or triaging of patients undergoing breast ultrasound examinations. KEY POINTS: • The two models, with coarse and fine ROIs of ultrasound images as input, achieved AUCs of 0.899 and 0.869, respectively; the accuracy, sensitivity, and specificity were 88.4%, 87.9%, and 89.2% with coarse ROIs and 86.1%, 87.9%, and 83.8% with fine ROIs. • The model with coarse ROIs performed slightly better than the model with fine ROIs on these evaluation metrics. • The results from the coarse and fine ROIs are consistent, and the peripheral tissue is also a factor in breast lesion classification.


Subjects
Breast Neoplasms; Breast; Artificial Intelligence; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Female; Humans; Pilot Projects; Sensitivity and Specificity; Ultrasonography
16.
Nat Commun; 8(1): 1934, 2017 Dec 5.
Article in English | MEDLINE | ID: mdl-29203839

ABSTRACT

Recent development of spectroscopic techniques based on quantum states of light can precipitate many breakthroughs in observing and controlling light-matter interactions in biological materials on a fundamental quantum level. For this reason, the generation of entangled light in biologically produced fluorescent proteins would be promising because of their biocompatibility. Here we demonstrate the generation of polarization-entangled two-photon states through spontaneous four-wave mixing in enhanced green fluorescent proteins. The reconstructed density matrix indicates that the entangled state is subject to decoherence originating from two-photon absorption. However, the prepared state is less sensitive to environmental decoherence because of the protective β-barrel structure that encapsulates the fluorophore in the protein. We further explore the quantumness, including classical and quantum correlations, of the state in the decoherence environment. Our method for photonic entanglement generation may have potential for developing quantum spectroscopic techniques and quantum-enhanced measurements in biological materials.


Subjects
Green Fluorescent Proteins; Photons; Spectrum Analysis
17.
Sci Rep; 6: 24344, 2016 Apr 14.
Article in English | MEDLINE | ID: mdl-27076032

ABSTRACT

Recent studies in quantum biology suggest that quantum mechanics can help us explore quantum processes in biological systems. Here, we demonstrate the generation of photon pairs through a spontaneous four-wave-mixing process in naturally occurring fluorescent proteins. We develop a general empirical method for analyzing the relative strength of nonlinear optical interaction processes in five different organic fluorophores. Our results indicate that the generation of photon pairs in green fluorescent proteins is subject to less background noise than in the other fluorophores, leading to a coincidence-to-accidental ratio of ~145. As such proteins can be genetically engineered and fused to many biological cells, our experiment enables a new platform for quantum information processing in a biological environment, such as biomimetic quantum networks and quantum sensors.
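
The coincidence-to-accidental ratio quoted above has a standard definition: CAR = C/A, where the accidental rate A is estimated from the detector singles rates and the coincidence window as A = S1·S2·τ. The numbers below are illustrative only, chosen to give CAR = 145, not the paper's measured rates:

    def coincidence_to_accidental_ratio(coincidences, singles_1, singles_2, window_s):
        # CAR = measured coincidence rate / accidental rate (S1 * S2 * tau).
        accidentals = singles_1 * singles_2 * window_s
        return coincidences / accidentals

    # e.g., 1e4 singles/s per detector, 1 ns window, 14.5 coincidences/s -> CAR = 145
    print(coincidence_to_accidental_ratio(14.5, 1e4, 1e4, 1e-9))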


Subjects
Green Fluorescent Proteins/metabolism; Luminescent Agents/metabolism; Optical Phenomena; Photons